Search results for "Video processing"

Showing 10 of 56 documents

PerceptNet: A Human Visual System Inspired Neural Network for Estimating Perceptual Distance

2019

Traditionally, the vision community has devised algorithms to estimate the distance between an original image and versions of it that have been subjected to perturbations. Inspiration is usually drawn from the human visual system and how it processes different perturbations, in order to replicate the extent to which they determine our ability to judge image quality. While recent works have presented deep neural networks trained to predict human perceptual quality, very few borrow any intuitions from the human visual system. To address this, we present PerceptNet, a convolutional neural network whose architecture has been chosen to reflect the structure and various stages in the human…
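The core idea behind this line of work, measuring distance between images in a feature space rather than between raw pixels, can be sketched as follows. The pooling transform and all parameters here are illustrative stand-ins, not PerceptNet's actual architecture:

```python
import numpy as np

def feature_distance(img_a, img_b, transform):
    """Euclidean distance between two images in a feature space.

    `transform` is a placeholder for a perceptually motivated encoder
    (in the paper, a CNN mirroring stages of the visual system); here
    it is any callable mapping an image to a feature array.
    """
    fa, fb = transform(img_a), transform(img_b)
    return float(np.sqrt(np.sum((fa - fb) ** 2)))

def pool2x2(img):
    """Toy stand-in transform: 2x2 average pooling, crudely mimicking
    early spatial pooling in the visual system (illustrative only)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    x = img[:h, :w]
    return (x[0::2, 0::2] + x[1::2, 0::2] + x[0::2, 1::2] + x[1::2, 1::2]) / 4.0

rng = np.random.default_rng(0)
img = rng.random((8, 8))
noisy = img + 0.1 * rng.standard_normal((8, 8))

d_pix = float(np.sqrt(np.sum((img - noisy) ** 2)))   # raw pixel distance
d_feat = feature_distance(img, noisy, pool2x2)       # feature-space distance
# Averaging suppresses zero-mean noise, so d_feat is smaller than d_pix.
```

Because a perceptually motivated transform discards variation the visual system is insensitive to, distances computed in feature space can better track judged image quality than pixel differences.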

Keywords: visual perception; image quality; perceptual distance; human visual system model; convolutional neural networks; artificial neural networks; deep learning; feature extraction; pattern recognition; machine learning (cs.LG, stat.ML); image and video processing (eess.IV); artificial intelligence
Research product

CrowdVAS-Net: A Deep-CNN Based Framework to Detect Abnormal Crowd-Motion Behavior in Videos for Predicting Crowd Disaster

2019

With the increased occurrence of crowd disasters such as human stampedes, crowd management and safety during mass-gathering events such as concerts, congregations, or political rallies are vital tasks for security personnel. In this paper, we propose a framework named CrowdVAS-Net for crowd-motion analysis that considers velocity, acceleration, and saliency features in the video frames of a moving crowd. CrowdVAS-Net relies on a deep convolutional neural network (DCNN) to extract motion and appearance feature representations from the video frames, which help classify the crowd-motion behavior in a short video clip as abnormal or normal. These feature representations a…
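A minimal sketch of frame-level velocity and acceleration cues via temporal differencing follows. This is a crude stand-in for the motion features described above, not the paper's DCNN pipeline; the clip and shapes are invented for illustration:

```python
import numpy as np

def motion_features(frames):
    """Per-frame velocity and acceleration magnitudes from a grayscale clip.

    Velocity is approximated by the first temporal difference and
    acceleration by the second; `frames` has shape (T, H, W).
    """
    frames = np.asarray(frames, dtype=float)
    vel = np.abs(np.diff(frames, n=1, axis=0))   # shape (T-1, H, W)
    acc = np.abs(np.diff(frames, n=2, axis=0))   # shape (T-2, H, W)
    # Average over the spatial dimensions to get one scalar per frame pair.
    return vel.mean(axis=(1, 2)), acc.mean(axis=(1, 2))

# Synthetic clip: a bright 2x2 block moving one pixel per frame.
T, H, W = 4, 6, 6
clip = np.zeros((T, H, W))
for t in range(T):
    clip[t, 2:4, t:t + 2] = 1.0

v, a = motion_features(clip)   # v has T-1 entries, a has T-2 entries
```

In a real system such per-frame motion statistics (alongside saliency and learned appearance features) would feed a classifier such as a random forest to label the clip abnormal or normal.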

Keywords: video processing; feature extraction; convolutional neural networks; machine learning; motion; random forests; mass gatherings; task analysis; artificial intelligence
Published in: 2019 IEEE International Conference on Systems, Man and Cybernetics (SMC)

Quantifying Vegetation Biophysical Variables from Imaging Spectroscopy Data: A Review on Retrieval Methods

2019

An unprecedented spectroscopic data stream will soon become available with forthcoming Earth-observing satellite missions equipped with imaging spectroradiometers. This data stream will open up a vast array of opportunities to quantify a diversity of biochemical and structural vegetation properties. The processing requirements for such large data streams require reliable retrieval techniques enabling the spatiotemporally explicit quantification of biophysical variables. With the aim of preparing for this new era of Earth observation, this review summarizes the state-of-the-art retrieval methods that have been applied in experimental imaging spectroscopy studies inferring all kinds of vegeta…
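Among the retrieval families such a review typically covers, the simplest is statistical regression from band reflectances to the biophysical variable of interest. A synthetic-data sketch of that idea follows (all variable names, band counts, and coefficients are invented for illustration, not taken from the review):

```python
import numpy as np

# Toy linear retrieval: estimate a vegetation variable (e.g. an
# LAI-like quantity) from band reflectances via least squares.
rng = np.random.default_rng(2)
n_samples, n_bands = 100, 5
reflectance = rng.random((n_samples, n_bands))       # simulated spectra
true_coefs = np.array([1.5, -0.8, 0.3, 2.0, -1.2])   # hidden relationship
lai = reflectance @ true_coefs + 0.01 * rng.standard_normal(n_samples)

# Fit the retrieval model; with low noise it recovers the coefficients.
coefs, *_ = np.linalg.lstsq(reflectance, lai, rcond=None)
```

Real retrieval chains face multicollinearity between bands and nonlinear radiative-transfer effects, which is why the literature also uses penalized regression, machine-learning regressors, and physically based model inversion.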

Keywords: data streams; Earth observation; imaging spectroscopy; spectroradiometers; retrieval methods; regression; parametric statistics; multicollinearity; data mining; quantitative methods (q-bio.QM); image and video processing (eess.IV)
Published in: Surveys in Geophysics

Deep Generative Model-Driven Multimodal Prostate Segmentation in Radiotherapy

2019

Deep learning has shown unprecedented success in a variety of applications, such as computer vision and medical image analysis. However, there is still potential to improve segmentation in multimodal images by embedding prior knowledge via learning-based shape modeling and registration, to learn the modality-invariant anatomical structure of organs. For example, in radiotherapy, automatic prostate segmentation is essential for prostate cancer diagnosis, therapy, and post-therapy assessment from T2-weighted MR or CT images. In this paper, we present a fully automatic deep generative model-driven multimodal prostate segmentation method using a convolutional neural network (DGMNet). The novelty of …

Keywords: prostate segmentation; prostate cancer; radiotherapy; MRI; CT; convolutional neural networks; deep learning; generative models; transfer learning; feature extraction; pattern recognition; computer vision (cs.CV); image and video processing (eess.IV)

A Bayesian Multilevel Random-Effects Model for Estimating Noise in Image Sensors

2020

Sensor noise sources cause differences in the signal recorded across pixels in a single image and across multiple images. This paper presents a Bayesian approach to decomposing and characterizing the sensor noise sources involved in imaging with digital cameras. A Bayesian probabilistic model, based on the (theoretical) model for noise sources in image sensing, is fitted to a set of time series of images with different reflectances and wavelengths under controlled lighting conditions. The image sensing model is complex, with several interacting components dependent on reflectance and wavelength. The properties of the Bayesian approach of defining conditional dependencies among parame…
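The decomposition being modeled can be illustrated with a simple moment-based estimate: variability across repeated frames at a fixed pixel (temporal noise) versus variability of the per-pixel means across the sensor (fixed-pattern noise). This sketch is a frequentist illustration of the decomposition, not the paper's Bayesian multilevel model, and all numbers are synthetic:

```python
import numpy as np

def decompose_noise(stack):
    """Split pixel variability into temporal and fixed-pattern components.

    `stack` has shape (n_images, H, W): repeated captures of the same
    uniformly lit scene.
    """
    stack = np.asarray(stack, dtype=float)
    per_pixel_mean = stack.mean(axis=0)
    temporal_var = stack.var(axis=0).mean()   # shot/read noise across frames
    fixed_pattern_var = per_pixel_mean.var()  # pixel-to-pixel offsets
    return temporal_var, fixed_pattern_var

rng = np.random.default_rng(1)
H, W, n = 16, 16, 200
offsets = 0.05 * rng.standard_normal((H, W))          # fixed pattern per pixel
frames = 0.5 + offsets + 0.2 * rng.standard_normal((n, H, W))

t_var, fp_var = decompose_noise(frames)   # ~0.04 and ~0.0025 respectively
```

A Bayesian multilevel model generalizes this by placing priors on the variance components and letting them depend on reflectance and wavelength, with full posterior uncertainty instead of point estimates.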

Keywords: image sensors; sensor noise; Bayesian inference; statistical modeling; random-effects models; mean squared error; pixels; applications (stat.AP); image and video processing (eess.IV); MSC 62P30, 62P35, 62F15, 62J05; ACM C.4, G.3, I.4.1

2015

Visuo-auditory sensory substitution systems are augmented reality devices that translate a video stream into an audio stream in order to help the blind in daily tasks requiring visuo-spatial information. In this work, we present both a new mobile device and a transcoding method specifically designed to sonify moving objects. Frame differencing is used to extract spatial features from the video stream and two-dimensional spatial information is converted into audio cues using pitch, interaural time difference and interaural level difference. Using numerical methods, we attempt to reconstruct visuo-spatial information based on audio signals generated from various video stimuli. We show that de…
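The transcoding described above, vertical position to pitch and horizontal position to interaural differences, can be sketched as a simple mapping. The frequency range, ILD scale, and frame dimensions below are illustrative choices, not the device's actual parameters:

```python
def sonify_position(x, y, width, height,
                    f_low=200.0, f_high=2000.0, max_ild_db=20.0):
    """Map a 2-D image position to audio cues.

    Vertical position -> pitch (higher in the frame = higher frequency,
    on a log scale); horizontal position -> interaural level difference
    (negative = louder in the left ear, positive = louder in the right).
    """
    frac_up = 1.0 - y / (height - 1)              # 1.0 at top of frame
    freq = f_low * (f_high / f_low) ** frac_up    # log-spaced pitch
    ild_db = max_ild_db * (2.0 * x / (width - 1) - 1.0)
    return freq, ild_db

# A moving object detected by frame differencing at the top-right corner
# maps to the highest pitch, fully right-panned.
freq, ild = sonify_position(x=63, y=0, width=64, height=48)
```

Interaural time difference would be handled analogously, by delaying one channel relative to the other in proportion to horizontal position.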

Keywords: sensory substitution; sonification; audio signal processing; motion detection; transcoding; video processing; speech recognition; computer vision; artificial intelligence
Published in: Frontiers in ICT

Smart camera design for intensive embedded computing

2005

Computer-assisted vision plays an important role in our society, in fields such as personal and goods safety, industrial production, telecommunications, robotics, etc. However, technical developments remain rare, slowed by various factors: sensor cost, lack of system flexibility, the difficulty of rapidly developing complex and robust applications, and the lack of interaction among these systems themselves or with their environment. This paper describes our proposal for a smart camera with real-time video processing capabilities. A CMOS sensor, a processor, and a reconfigurable unit combined on the same chip will allow scalability, flexibility, and high performance.

Keywords: smart cameras; CMOS sensors; image sensors; embedded systems; real-time video processing; scalability; flexibility; robotics; computer vision
Published in: Real-Time Imaging

Automatic Myocardial Infarction Evaluation from Delayed-Enhancement Cardiac MRI using Deep Convolutional Networks

2020

In this paper, we propose a new deep learning framework for automatic myocardial infarction evaluation from clinical information and delayed-enhancement MRI (DE-MRI). The proposed framework addresses two tasks. The first is the automatic detection of myocardial contours, the infarcted area, the no-reflow area, and the left ventricular cavity from a short-axis DE-MRI series. It employs two segmentation neural networks. The first network segments the anatomical structures, such as the myocardium and left ventricular cavity. The second network segments the pathological areas, such as myocardial infarction, myocardial no-reflow, and normal myocardial regions. The segmented …

Keywords: cardiovascular system; cardiovascular diseases; computer vision (cs.CV); image and video processing (eess.IV)

3D landmark detection for augmented reality based otologic procedures

2019

The ear contains the smallest bones in the human body and does not offer a significant number of distinct landmark points that could be used to register a preoperative CT scan with the surgical video in an augmented reality framework. Learning-based algorithms can help surgeons identify landmark points. This paper presents a convolutional neural network approach to landmark detection in preoperative ear CT images, and then discusses an augmented reality system that can be used to visualize the cochlear axis on an otologic surgical video.

Keywords: artificial intelligence (cs.AI); computer vision and pattern recognition (cs.CV); medical imaging; image and video processing (eess.IV)

Fully automated analysis of muscle architecture from B-mode ultrasound images with deep learning

2020

B-mode ultrasound is commonly used to image musculoskeletal tissues, but one major bottleneck is data interpretation, and analyses of muscle thickness, pennation angle and fascicle length are often still performed manually. In this study we trained deep neural networks (based on U-net) to detect muscle fascicles and aponeuroses using a set of labelled musculoskeletal ultrasound images. We then compared neural network predictions on new, unseen images to those obtained via manual analysis and two existing semi/automated analysis approaches (SMA and Ultratrack). With a GPU, inference time for a single image with the new approach was around 0.7s, compared to 4.6s with a CPU. Our method detects…
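Once fascicles and aponeuroses have been located (by a network or by hand), one of the reported quantities, the pennation angle, reduces to the angle between two direction vectors. This is the standard geometric definition, sketched below; it is not the tool's exact code:

```python
import math

def pennation_angle(fascicle_vec, aponeurosis_vec):
    """Angle in degrees between a fascicle direction and the aponeurosis.

    Each argument is a 2-D direction vector (dx, dy) in image coordinates.
    """
    (fx, fy), (ax, ay) = fascicle_vec, aponeurosis_vec
    dot = fx * ax + fy * ay
    norm = math.hypot(fx, fy) * math.hypot(ax, ay)
    # Clamp guards against floating-point values slightly outside [-1, 1].
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

# A fascicle rising at 45 degrees over a horizontal deep aponeurosis.
angle = pennation_angle((1.0, 1.0), (1.0, 0.0))
```

Fascicle length follows similarly from the detected endpoints, which is why automating the detection step removes most of the manual effort.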

Keywords: computer vision (cs.CV); image and video processing (eess.IV)